Artificial intelligence is gaining state lawmakers’ attention, and they have a lot of questions
HARTFORD, Conn. (AP) — As state lawmakers rush to get a handle on fast-evolving artificial intelligence technology, they’re often focusing first on their own state governments before imposing restrictions on the private sector.
Legislators are seeking ways to protect constituents from discrimination and other harms while not hindering cutting-edge advancements in medicine, science, business, education and more.
“We’re starting with the government. We’re trying to set a good example,” Connecticut state Sen. James Maroney said during a floor debate in May.
Connecticut plans to inventory all of its government systems using artificial intelligence by the end of 2023, posting the information online. And starting next year, state officials must regularly review these systems to ensure they won’t lead to unlawful discrimination.
Maroney, a Democrat who has become a go-to AI authority in the General Assembly, said Connecticut lawmakers will likely focus on private industry next year. He plans to work this fall on model AI legislation with lawmakers in Colorado, New York, Virginia, Minnesota and elsewhere that includes “broad guardrails” and focuses on matters like product liability and requiring impact assessments of AI systems.
“It’s rapidly changing and there’s a rapid adoption of people using it. So we need to get ahead of this,” he said in a later interview. “We’re actually already behind it, but we can’t really wait too much longer to put in some form of accountability.”
Overall, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills this year. As of late July, 14 states and Puerto Rico had adopted resolutions or enacted legislation, according to the National Conference of State Legislatures. The list doesn’t include bills focused on specific AI technologies, such as facial recognition or autonomous cars, something NCSL is tracking separately.
Legislatures in Texas, North Dakota, West Virginia and Puerto Rico have created advisory bodies to study and monitor AI systems their respective state agencies are using, while Louisiana formed a new technology and cybersecurity committee to study AI’s impact on state operations, procurement and policy. Other states took a similar approach last year.
Lawmakers want to know “Who’s using it? How are you using it? Just gathering that data to figure out what’s out there, who’s doing what,” said Heather Morton, a legislative analyst at NCSL who tracks artificial intelligence, cybersecurity, privacy and internet issues in state legislatures. “That is something that the states are trying to figure out within their own state borders.”
Connecticut’s new law, which requires AI systems used by state agencies to be regularly scrutinized for possible unlawful discrimination, comes after an investigation by the Media Freedom and Information Access Clinic at Yale Law School determined AI is already being used to assign students to magnet schools, set bail and distribute welfare benefits, among other tasks. However, details of the algorithms are mostly unknown to the public.
AI technology, the group said, “has spread throughout Connecticut’s government rapidly and largely unchecked, a development that’s not unique to this state.”
Richard Eppink, legal director of the American Civil Liberties Union of Idaho, testified before Congress in May about discovering, through a lawsuit, the “secret computerized algorithms” Idaho was using to assess people with developmental disabilities for federally funded health care services. The automated system, he said in written testimony, included corrupt data that relied on inputs the state hadn’t validated.
AI can be shorthand for many different technologies, ranging from algorithms recommending what to watch next on Netflix to generative AI systems such as ChatGPT that can aid in writing or create new images or other media. The surge of commercial investment in generative AI tools has generated public fascination and concerns about their ability to trick people and spread disinformation, among other dangers.
Some states haven’t attempted to tackle the issue yet. In Hawaii, state Sen. Chris Lee, a Democrat, said lawmakers didn’t pass any legislation this year governing AI “simply because I think at the time, we didn’t know what to do.”
Instead, the Hawaii House and Senate passed a resolution Lee proposed that urges Congress to adopt safety guidelines for the use of artificial intelligence and limit its application in the use of force by police and the military.
Lee, vice-chair of the Senate Labor and Technology Committee, said he hopes to introduce a bill in next year’s session that is similar to Connecticut’s new law. Lee also wants to create a permanent working group or department to address AI matters with the right expertise, something he admits is difficult to find.
“There aren’t a lot of people right now working within state governments or traditional institutions that have this kind of experience,” he said.
The European Union is leading the world in building guardrails around AI. There has been discussion of bipartisan AI legislation in Congress, which Senate Majority Leader Chuck Schumer said in June would maximize the technology’s benefits and mitigate significant risks.
Yet the New York senator did not commit to specific details. In July, President Joe Biden announced his administration had secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before releasing them.
Maroney said ideally the federal government would lead the way in AI regulation. But he said the federal government can’t act at the same speed as a state legislature.
“And as we’ve seen with the data privacy, it’s really had to bubble up from the states,” Maroney said.
Some state-level bills proposed this year have been narrowly tailored to address specific AI-related concerns. Proposals in Massachusetts would place limitations on mental health providers using AI and prevent “dystopian work environments” where workers don’t have control over their personal data. A proposal in New York would place restrictions on employers using AI as an “automated employment decision tool” to filter job candidates.
North Dakota passed a bill defining what a person is, making it clear the term does not include artificial intelligence. Republican Gov. Doug Burgum, a long-shot presidential contender, has said such guardrails are needed for AI but the technology should still be embraced to make state government less redundant and more responsive to citizens.
In Arizona, Democratic Gov. Katie Hobbs vetoed legislation that would prohibit voting machines from having any artificial intelligence software. In her veto letter, Hobbs said the bill “attempts to solve challenges that do not currently face our state.”
In Washington, Democratic Sen. Lisa Wellman, a former systems analyst and programmer, said state lawmakers need to prepare for a world in which machine systems become ever more prevalent in our daily lives.
She plans to roll out legislation next year that would require students to take computer science to graduate from high school.
“AI and computer science are now, in my mind, a foundational part of education,” Wellman said. “And we need to understand really how to incorporate it.”
___
Associated Press Writers Audrey McAvoy in Honolulu, Ed Komenda in Seattle and Matt O’Brien in Providence, Rhode Island, contributed to this report.